# RoBERTa Architecture
**Medical Ner Roberta** (nairaxo) · Sequence Labeling · Transformers · 58 downloads · 1 like
A medical-domain named entity recognition model based on the RoBERTa architecture, used to identify domain-specific entities in medical text.

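Token-classification models like this one typically emit one BIO tag per token, which a consumer then collapses into entity spans. A minimal sketch of that post-processing step, using made-up tokens and tag names rather than this model's actual label set:

```python
def bio_to_entities(tokens, tags):
    """Collapse per-token BIO tags into (entity_type, text) spans."""
    entities, current = [], None
    for token, tag in zip(tokens, tags):
        if tag.startswith("B-"):
            # A B- tag always opens a new entity.
            if current:
                entities.append(current)
            current = [tag[2:], [token]]
        elif tag.startswith("I-") and current and current[0] == tag[2:]:
            # An I- tag of the same type continues the open entity.
            current[1].append(token)
        else:
            # "O" (or a mismatched I- tag) closes any open entity.
            if current:
                entities.append(current)
            current = None
    if current:
        entities.append(current)
    return [(label, " ".join(words)) for label, words in entities]

tokens = ["Aspirin", "treats", "chest", "pain"]
tags = ["B-DRUG", "O", "B-SYMPTOM", "I-SYMPTOM"]  # hypothetical labels
print(bio_to_entities(tokens, tags))  # [('DRUG', 'Aspirin'), ('SYMPTOM', 'chest pain')]
```
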
**Parrots Chinese Roberta Wwm Ext Large** (shibing624, Apache-2.0) · Large Language Model · Transformers · Chinese · 76 downloads · 2 likes
A Chinese pre-trained model based on the RoBERTa architecture, supporting text-to-speech tasks.

**AIGC Detector Env2** (yuchuantian, Apache-2.0) · Text Classification · Transformers · 94 downloads · 5 likes
An AI-generated text detector based on a multi-scale positive-unlabeled detection method, used to identify AI-generated text content.

**Drbert 7GB** (Dr-BERT, Apache-2.0) · Large Language Model · Transformers · French · 4,781 downloads · 13 likes
DrBERT is a French RoBERTa model trained on NACHOS, an open-source French medical text corpus, specializing in the biomedical and clinical domains.

**Danskbert** (vesteinn) · Large Language Model · Transformers · Other · 151 downloads · 6 likes
A language model optimized for Danish, which performs strongly on the Danish portion of the ScandEval benchmark.

**Xlm Roberta Base Finetuned Panx All** (okite97, MIT) · Large Language Model · Transformers · 15 downloads · 0 likes
A multilingual named entity recognition model fine-tuned from xlm-roberta-base on the PANX dataset.

**Roberta Base Finetuned Squad** (janeel, MIT) · Question Answering System · Transformers · 16 downloads · 0 likes
A question-answering model fine-tuned from RoBERTa-base on the SQuAD 2.0 dataset, designed to answer questions about a given text.

**Kosimcse Roberta Multitask** (BM-K) · Text Embedding · Transformers · Korean · 37.37k downloads · 23 likes
A Korean semantic-similarity model built on the RoBERTa architecture, achieving high-performance sentence embeddings through multi-task learning.

**Legalbert Large 1.7M 1** (pile-of-law) · Large Language Model · Transformers · English · 120 downloads · 14 likes
A BERT-large model pretrained on English legal and administrative texts using RoBERTa pretraining objectives.

**Icebert Xlmr Ic3** (mideind) · Large Language Model · Transformers · Other · 24 downloads · 0 likes
An Icelandic masked language model based on the RoBERTa-base architecture, fine-tuned from xlm-roberta-base.

**Icebert Ic3** (mideind) · Large Language Model · Transformers · Other · 16 downloads · 0 likes
An Icelandic masked language model built on the RoBERTa-base architecture and trained with the fairseq framework.

**Camembert2camembert Shared Finetuned French Summarization** (mrm8488) · Text Generation · Transformers · French · 540 downloads · 14 likes
A French text summarization model based on the CamemBERT architecture, fine-tuned specifically for French news summarization.

**Quora Roberta Base** (navteca, MIT) · Text Classification · English · 845 downloads · 0 likes
A RoBERTa-based model trained with the SentenceTransformers CrossEncoder class, designed to detect duplicate questions on Quora.

**Roberta Base Ca Cased Te** (projecte-aina, Apache-2.0) · Text Classification · Transformers · Other · 19 downloads · 0 likes
A Catalan textual entailment model based on the RoBERTa architecture, fine-tuned on the TE-ca dataset.

**Roberta Base Few Shot K 128 Finetuned Squad Seed 42** (anas-awadalla, MIT) · Question Answering System · Transformers · 19 downloads · 0 likes
A QA model fine-tuned from RoBERTa-base on the SQuAD dataset using few-shot learning.

**Robbert V2 Dutch Ner** (pdelobelle, MIT) · Large Language Model · Other · 76.94k downloads · 3 likes
RobBERT is the state-of-the-art Dutch BERT model, pretrained at scale and adaptable to a variety of text tasks through fine-tuning.

**Roberta Base Finetuned Squad2** (mvonwyl, MIT) · Question Answering System · Transformers · 19 downloads · 0 likes
A question-answering model fine-tuned from RoBERTa-base on the SQuAD 2.0 dataset.

**Roberta Base Japanese** (nlp-waseda) · Large Language Model · Transformers · Japanese · 456 downloads · 32 likes
A Japanese RoBERTa-based pretrained model, trained on Japanese Wikipedia and the Japanese portion of CC-100.

**Bertin Base Gaussian** (bertin-project) · Large Language Model · Spanish · 16 downloads · 0 likes
A Spanish fill-mask model based on the RoBERTa-base architecture, trained from scratch.

**Just A Test** (osanseviero) · Text Embedding · 15 downloads · 0 likes
A sentence-similarity model based on the RoBERTa architecture, supporting Chinese text.

**Data2vec Nlp Base** (edugp, Apache-2.0) · Large Language Model · Transformers · 14 downloads · 0 likes
Data2Vec NLP Base is a natural language processing model converted from the fairseq framework, suitable for tasks such as text classification.

**Roberta2roberta L 24 Discofuse** (google, Apache-2.0) · Text Generation · Transformers · English · 102 downloads · 2 likes
An encoder-decoder model based on the RoBERTa architecture, designed specifically for sentence fusion tasks.

**Roberta Toxicity Classifier V1** (s-nlp) · Text Classification · Transformers · 139 downloads · 0 likes
A cloned version of the RoBERTa-based text toxicity classifier, used to evaluate the effectiveness of text detoxification algorithms.

**Model All Distilroberta V1 30 Epochs** (jfarray) · Text Embedding · 10 downloads · 0 likes
A sentence-embedding model based on sentence-transformers that maps text into a 768-dimensional vector space, suitable for sentence-similarity and semantic-search tasks.

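Embedding models like the one above are typically compared with cosine similarity over their fixed-size vectors. A minimal, library-free sketch of that comparison, using short toy vectors as stand-ins for the model's 768-dimensional embeddings:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors (1.0 = same direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Toy 4-d stand-ins for real 768-d sentence embeddings.
emb_query = [0.2, 0.1, 0.9, 0.0]
emb_doc = [0.2, 0.1, 0.9, 0.0]
print(cosine_similarity(emb_query, emb_doc))  # 1.0 for identical vectors
```

In semantic search, the query embedding is scored this way against every document embedding and the highest-scoring documents are returned.
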
**Stsb Roberta Base** (cross-encoder, Apache-2.0) · Text Embedding · English · 229.83k downloads · 4 likes
A cross-encoder based on RoBERTa-base that predicts a semantic-similarity score (0 to 1) for a pair of sentences.

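A cross-encoder's regression head produces a single raw score for the sentence pair; a common way to map such a raw logit into the advertised 0-1 range is a sigmoid (the exact activation this checkpoint uses is an assumption here, shown only to illustrate the idea):

```python
import math

def to_unit_interval(raw_logit):
    """Squash a raw regression logit into (0, 1) with a sigmoid."""
    return 1.0 / (1.0 + math.exp(-raw_logit))

print(to_unit_interval(0.0))  # 0.5: a neutral logit maps to the midpoint
```

Unlike the bi-encoder embedding models above, a cross-encoder reads both sentences jointly, so it cannot pre-compute document vectors but usually scores pairs more accurately.
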
**Roberta Base Squad V1** (csarron, MIT) · Question Answering System · English · 95 downloads · 0 likes
A RoBERTa-based model fine-tuned on the SQuAD 1.1 dataset, capable of extracting answers to questions from a given context.

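Extractive QA models of this kind output per-token start and end logits, and the answer is the span that maximizes their sum subject to start <= end (and usually a length cap). A minimal sketch of that span-selection step over toy logits:

```python
def best_span(start_logits, end_logits, max_len=10):
    """Pick (start, end) maximizing start_logits[s] + end_logits[e], s <= e."""
    best, best_score = (0, 0), float("-inf")
    for s, s_score in enumerate(start_logits):
        # Only consider ends at or after the start, within the length cap.
        for e in range(s, min(s + max_len, len(end_logits))):
            score = s_score + end_logits[e]
            if score > best_score:
                best, best_score = (s, e), score
    return best

# Toy logits for a 4-token context; real models emit one pair per token.
start = [0.1, 2.0, 0.3, 0.0]
end = [0.0, 0.1, 1.5, 0.2]
print(best_span(start, end))  # (1, 2): tokens 1 through 2 form the answer
```

The chosen token indices are then mapped back to character offsets in the context to produce the answer string.
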
**Uztext 3Gb BPE Roberta** (rifkat, Apache-2.0) · Large Language Model · Transformers · Other · 25 downloads · 7 likes
A pretrained Uzbek model (Cyrillic and Latin scripts) for masked language modeling and sentence prediction.

**Bertin Base Random** (bertin-project) · Large Language Model · Spanish · 19 downloads · 0 likes
A RoBERTa-base model trained entirely from scratch on Spanish data, specializing in masked language modeling.

**Sanberta** (surajp) · Large Language Model · Other · 15 downloads · 2 likes
SanBERTa is a RoBERTa model trained on Sanskrit text, designed for Sanskrit text-processing tasks.

**S Biomed Roberta Snli Multinli Stsb** (pritamdeka) · Text Embedding · Transformers · 270 downloads · 4 likes
A sentence-transformer model based on allenai/biomed_roberta_base, fine-tuned for sentence-similarity tasks; it maps text into a 768-dimensional vector space.

**Sundanese Roberta Base** (w11wo, MIT) · Large Language Model · Other · 32 downloads · 2 likes
A Sundanese masked language model based on the RoBERTa architecture, trained on multiple datasets.

**Japanese Roberta Base** (rinna, MIT) · Large Language Model · Transformers · Japanese · 9,375 downloads · 37 likes
A base-sized Japanese RoBERTa model trained by rinna Co., Ltd., suitable for masked language modeling on Japanese text.

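The masked-LM checkpoints in this list all work the same way at inference: the model emits a logit per vocabulary entry at the masked position, and the prediction is the highest-probability token after a softmax. A toy sketch of that final step, with a made-up three-word vocabulary and invented logits in place of real model output:

```python
import math

def softmax(logits):
    """Convert raw logits to probabilities that sum to 1."""
    m = max(logits)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical vocabulary and logits for one masked position.
vocab = ["Tokyo", "dog", "sky"]
logits = [3.2, 0.5, 1.1]
probs = softmax(logits)
print(vocab[probs.index(max(probs))])  # highest-scoring candidate: "Tokyo"
```

Real vocabularies run to tens of thousands of subword tokens, but the ranking step is identical.
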
**Roberta Base Few Shot K 1024 Finetuned Squad Seed 4** (anas-awadalla, MIT) · Question Answering System · Transformers · 19 downloads · 0 likes
A QA model fine-tuned from RoBERTa-base on the SQuAD dataset, suitable for reading-comprehension tasks.

**Roberta Base Russian V0** (blinoff) · Large Language Model · Other · 109 downloads · 8 likes
A RoBERTa-like language model trained on a subset of the TAIGA corpus, intended primarily for Russian text processing.

**Indonesian Roberta Base** (flax-community, MIT) · Large Language Model · Other · 1,013 downloads · 11 likes
An Indonesian masked language model based on the RoBERTa architecture, trained on the OSCAR corpus with a validation accuracy of 62.45%.

**Mrc Pretrained Roberta Large 1** (this-is-real) · Large Language Model · Transformers · 14 downloads · 0 likes
KLUE-RoBERTa-large is a Korean pre-trained language model based on the RoBERTa architecture, developed by a Korean research team and optimized for Korean natural language processing tasks.

**Tf Camembert Base** (jplu) · Large Language Model · Transformers · 1,942 downloads · 0 likes
A TensorFlow-compatible version of the CamemBERT French language model, which is based on the RoBERTa architecture.

**Guwenbert Large** (ethanyt, Apache-2.0) · Large Language Model · Chinese · 217 downloads · 10 likes
A RoBERTa model pre-trained on Classical Chinese, suitable for processing ancient texts.

**Wangchanberta Base Wiki Newmm** (airesearch) · Large Language Model · Other · 115 downloads · 2 likes
A RoBERTa-base model pretrained on Thai Wikipedia, suitable for Thai text-processing tasks.

**Tajberto** (muhtasham) · Large Language Model · Transformers · Other · 31 downloads · 4 likes
TajBERTo is the first RoBERTa-like language model for Tajik, designed for Tajik natural language processing tasks.